BMC Medicine

Springer Science and Business Media LLC

Preprints posted in the last 90 days, ranked by how well they match BMC Medicine's content profile, based on 163 papers previously published here. The average preprint has a 0.23% match score for this journal, so anything above that is already an above-average fit.

1
Effect mechanisms of different malaria chemoprevention regimens in pregnancy on infant growth outcomes: causal mediation analysis of a randomized controlled trial

Nguyen, A. T.; Nankabirwa, J. I.; Kakuru, A.; Roh, M. E.; Aguti, M.; Adrama, H.; Kizza, J.; Olwoch, P.; Kamya, M. R.; Dorsey, G.; Jagannathan, P.; Benjamin-Chung, J.

2026-04-25 public and global health 10.64898/2026.04.17.26351121 medRxiv
Top 0.1%
32.9%

Introduction: Intermittent preventive treatment in pregnancy (IPTp) with sulfadoxine-pyrimethamine (SP) has become less effective at preventing malaria due to rising parasite resistance. IPTp with dihydroartemisinin-piperaquine (DP) alone or in combination with SP (DP+SP) dramatically lowers the risk of malaria in pregnancy compared to SP but is associated with lower birthweight and early life wasting. We estimated the effect of IPTp-DP, DP+SP, and SP on infant growth outcomes and assessed possible treatment mechanisms through a causal mediation analysis. Methods: We used infant follow-up data (N=761) from a trial (NCT04336189) that randomized pregnant women to receive monthly IPTp-DP, SP, or DP+SP. We compared weight-for-length (WLZ) and length-for-age (LAZ) z-scores between treatment arms. We assessed possible mediation through pregnancy, birth, and infancy factors using interventional indirect effect models. Results: Compared to IPTp-SP, IPTp-DP+SP decreased mean WLZ by 0.18 [95% confidence interval (CI) -0.03, 0.39] between 1-3 months and 0.28 (95% CI 0.07, 0.49) between 4-6 months, with the largest differences among primigravidae. Lower risk of active placental malaria in IPTp-DP+SP helped reduce differences in mean WLZ vs IPTp-SP (+0.06, 95% CI 0.02, 0.10). The IPTp-DP+SP arm had up to 0.28 lower mean LAZ between 7-13 months compared to IPTp-DP, particularly among children who were wasted between 0-6 months; low birthweight had a persistent, mediating effect on linear growth. Conclusion: Adverse birth outcomes contributed to early growth faltering among children born to mothers receiving IPTp-DP+SP vs IPTp-SP, but the prevention of placental malaria partially counteracted the negative effects of IPTp-DP+SP on ponderal growth.

2
Disruption and recovery of notifiable infectious diseases after COVID-19 in Australia, 2015-2025

Farquhar, H. L.

2026-02-17 public and global health 10.64898/2026.02.13.26346301 medRxiv
Top 0.1%
28.0%

Background: COVID-19 non-pharmaceutical interventions (NPIs) disrupted transmission of many infectious diseases worldwide. While disruption patterns are well-documented, systematic analysis of post-pandemic recovery trajectories across diverse pathogens remains limited. We examined disruption and recovery of 47 nationally notifiable diseases in Australia from 2015 to 2025. Methods: We analysed NNDSS surveillance data for 47 diseases across six transmission modes, quantifying disruption using observed-to-expected (O/E) ratios against 2015-2019 baselines. We applied difference-in-differences (DiD) to estimate causal NPI effects, Kaplan-Meier survival analysis for time-to-recovery, and bootstrap 95% confidence intervals for cumulative immunity debt. Results: During 2020-2021, 28 diseases decreased (median O/E 0.51), with border-sensitive and vaccine-preventable diseases most affected. DiD analysis estimated that border closures were associated with significantly greater suppression among import-dependent diseases (coefficient -0.50, 95% CI -0.90 to -0.10, p=0.016). By 2025, recovery was heterogeneous: 17 diseases exceeded baseline levels, 12 returned to expected levels, 15 remained below baseline (9 partially recovered, 6 in sustained suppression), and 3 had insufficient data for trajectory classification. Five diseases showed suppression-then-overshoot trajectories suggestive of immunity debt, though bootstrap 95% confidence intervals confirmed statistically significant cumulative excess for only one (rotavirus); for influenza, high baseline variability precluded statistical confirmation despite a large absolute overshoot. Conclusions: Post-pandemic disease recovery in Australia is heterogeneous and incomplete. Fifteen of 47 diseases have not returned to baseline levels by 2025, while 17 exhibit overshoot. These findings argue for differentiated surveillance of still-suppressed diseases and targeted catch-up vaccination in pandemic birth cohorts.
Article summary: We analysed disruption and recovery of 47 nationally notifiable diseases in Australia from 2015 to 2025, finding that 15 diseases remain below pre-pandemic levels three years after NPI relaxation. Border closures caused disproportionate suppression of import-dependent diseases, and recovery trajectories varied by disease characteristics, with immunity debt statistically confirmed for only one of five candidate diseases.
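
The disruption metric in this abstract is simple enough to sketch. The following is a minimal illustration of an observed-to-expected (O/E) ratio computed against a 2015-2019 baseline; the disease counts are hypothetical, not taken from the study.

```python
def oe_ratio(observed: float, baseline_counts: list[float]) -> float:
    """Observed-to-expected ratio: observed notifications divided by
    the mean annual count over the pre-pandemic baseline years."""
    expected = sum(baseline_counts) / len(baseline_counts)
    return observed / expected

# Hypothetical disease with a stable 2015-2019 baseline averaging 1,000
# notifications per year; 510 cases in a pandemic year gives O/E = 0.51,
# i.e. roughly half the expected activity.
baseline = [980, 1010, 1000, 995, 1015]
print(oe_ratio(510, baseline))  # 0.51
```

An O/E ratio below 1 indicates suppression relative to baseline; sustained ratios above 1 after NPI relaxation are what the authors treat as overshoot.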

3
The association between severity and aetiology of chronic liver disease and seasonal influenza vaccination uptake in adults: a retrospective cohort study using English primary care data (2019-2024)

Haeusler, I. L.; Etoori, D.; Campbell, C. N. J.; McDonald, S. L. R.; Lopez Bernal, J.; Mounier-Jack, S.; Kasstan-Dabush, B.; McDonald, H. I.; Parker, E. P. K.; Suffel, A.

2026-04-11 public and global health 10.64898/2026.04.08.26350434 medRxiv
Top 0.1%
26.6%

Background: In England, individuals with chronic liver disease (CLD) are among those with the lowest seasonal influenza vaccine uptake despite being at elevated risk of severe influenza. We examined the relationship between CLD severity and aetiology, and influenza vaccine uptake in England. Methods: A retrospective cohort study of adults (18-115 years) using Clinical Practice Research Datalink Aurum primary care data was conducted for five seasons (2019/20-2023/24). Poisson regression was used to estimate rates of uptake by CLD severity (clinical diagnoses categorised as low, moderate, or severe) and aetiology (alcohol-related, viral-related, and diagnoses in the Green Book guidelines). Findings: There were 182,174-277,470 individuals with CLD per cohort. Among those who were additionally age-eligible for vaccination, uptake was 71.1-79.7% compared to 30.9-40.5% in those not additionally age-eligible. Among individuals below age eligibility without other comorbidities, severity was associated with higher uptake (incidence rate ratio [IRR] moderate 1.80, 95% CI 1.69-1.90; severe 1.95, 95% CI 1.84-2.08 in 2023/24); there was no effect in those with at least one additional comorbidity (moderate 1.05, 95% CI 0.99-1.10; severe 1.05, 95% CI 1.01-1.09). Alcohol- and viral-related aetiology were also associated with increased uptake in those not additionally age-eligible. Among individuals meeting age eligibility without additional comorbidities, severity was associated with reduced uptake (moderate 0.81, 95% CI 0.73-0.90; severe 0.79, 95% CI 0.74-0.85), with attenuation in those with additional comorbidities (moderate 0.99, 95% CI 0.94-1.04; severe 0.91, 95% CI 0.89-0.94).
Interpretation: CLD severity and aetiology were important determinants of uptake in the absence of additional indications for influenza vaccination. Future research should prioritise understanding facilitators and barriers to vaccine uptake in individuals with CLD, particularly for those at highest risk of severe infection. Funding: NIHR Health Protection Research Unit in Vaccines and Immunisation (NIHR200929/NIHR207408). Research in context. Evidence before this study: We searched PubMed up to June 2025 using the terms "chronic liver disease", "cirrhosis", "hepatitis", "influenza vaccination", "seasonal influenza", and "vaccine uptake". Previous research, including national data from England, has shown that people with chronic liver disease tend to have lower seasonal influenza vaccine uptake than individuals with other medical comorbidities which qualify for vaccination, such as diabetes, chronic kidney disease or immunosuppression. The reasons for low influenza vaccine uptake in people with chronic liver disease are not well understood, and it is therefore difficult for vaccination providers, principally primary care services in England, to tailor interventions aimed at increasing uptake. Qualitative research involving individuals aged less than 65 years living in England with clinical risk comorbidities, most commonly diabetes, found that chronic disease management pathways inconsistently provided information about the importance of influenza vaccination as part of chronic disease management. Individuals with long-term conditions reported low perceived risk of influenza infection and limited awareness of vaccine benefits as important reasons for non-uptake. We hypothesised that the severity and aetiology of chronic liver disease may be important determinants of uptake. Added value of this study: We conducted a population-based study to examine how chronic liver disease severity and aetiology influence seasonal influenza vaccine uptake in adults in England.
Using primary care electronic health record data from five consecutive influenza seasons (2019/20-2023/24), we found that more severe chronic liver disease was associated with a substantial increase in vaccine uptake in those without additional indications for seasonal influenza vaccination (age-based eligibility or other qualifying clinical risk comorbidities). Alcohol- and viral-related aetiology were also associated with increased uptake in those who were not additionally age-eligible for vaccination. In contrast, severity and alcohol- and viral-related underlying aetiology were associated with a modest reduction in uptake for individuals with chronic liver disease who also qualified for vaccination due to age. Implications of all the available evidence: Despite clear clinical vulnerability to infection and a substantially elevated risk of morbidity and mortality following infection, a large proportion of adults with chronic liver disease, particularly those aged under 65 years, remain unvaccinated against seasonal influenza each year. This study suggests that chronic liver disease severity and underlying aetiology are important determinants of uptake in individuals not meeting age-based vaccine eligibility, particularly in those without additional clinical risk comorbidities. This could be because of differing perceptions of influenza risk, or due to varying degrees of interaction with healthcare specialists as part of chronic disease management. In individuals who met age-based vaccination eligibility, the negative effect of severity on influenza vaccine uptake may reflect greater barriers to accessing vaccination services by those with more complex health needs, or competing medical priorities for long-term condition management during consultations. To inform targeted vaccination strategies, future research should aim to understand the specific facilitators and barriers to influenza vaccination experienced by individuals with chronic liver disease. This should include perspectives of individuals with different disease severity, across different age groups, in those with and without additional comorbidities.

4
Modelling the impact of long-acting monoclonal antibody, maternal vaccine and hybrid programs of RSV immunisation in temperate Western Australia

Giannini, F.; Hogan, A. B.; Blyth, C. C.; Glass, K.; Moore, H. C.

2026-03-04 epidemiology 10.64898/2026.03.02.26347477 medRxiv
Top 0.1%
23.7%

Background: Two RSV immunisation products, a maternal vaccine (Abrysvo) and a long-acting monoclonal antibody (nirsevimab), both designed to prevent RSV illness in infants, have recently become available. Modelling evidence is required to inform how to optimally use these products in immunisation programs to reduce the burden of RSV in young children. Methods: We extend a dynamic transmission model calibrated to RSV-hospitalisation data of children aged <5 years in temperate Western Australia (WA) to simulate a range of potential RSV immunisation programs. Using our model, we estimate the impact of both single-product and hybrid RSV immunisation programs. The analysis considers timing of administration, coverage levels and targeting of high-risk groups. Impact on RSV burden is analysed in the context of the WA setting and the potentially significant cost differences between the two products. Results: All programs analysed were effective in reducing RSV burden. Programs using nirsevimab for newborn infants at similar coverage levels to the Abrysvo programs averted more RSV-hospitalisations annually. Seasonal programs that focused on protection during high RSV activity and programs targeting high-risk infants were the most efficient in reducing RSV burden. When dose cost is considered alongside program impact on RSV burden, we find evidence to support further economic analysis of hybrid programs, as they could mitigate the cost differential between the two products while remaining highly effective in reducing RSV burden. Conclusions: Our study is the first to comprehensively analyse hybrid RSV immunisation programs that use Abrysvo and nirsevimab. RSV immunisation programs can substantially reduce the burden of RSV in young children. Our modelling analysis provides evidence on immunisation type, timing, coverage, high-risk groups and dosage cost that will support decision makers and can be used in economic evaluations.

5
Respiratory Viral Contribution to Acute Myocardial Infarction: A Time Series and Spatiotemporal Analysis in Victoria, Australia 2010-2022

Nguyen, T. Q.; SnotWatch Collaboration Group; Zhao, E.; Weinman, A. L.; Atkins, B. D.; Spelman, T.; Mavoa, S.; Clothier, H. J.; Reid, C. M.; Buttery, J. P.

2026-02-03 cardiovascular medicine 10.64898/2026.01.30.26345252 medRxiv
Top 0.1%
22.8%

BACKGROUND: Respiratory viral infections can trigger acute myocardial infarction (AMI). However, the proportion of AMI events attributable to viral respiratory pathogens is unclear. METHODS: This ecological study used time-series and spatiotemporal analyses to examine population-level patterns in Victoria, Australia, from 2010 to 2022. Independent statewide admissions and laboratory datasets were obtained. Generalized additive modelling was used to analyze the temporal association between respiratory viral circulation captured by polymerase chain reaction (PCR) testing and weekly counts of AMI admissions. A Bayesian hierarchical model was used to explore spatiotemporal variation in AMI associated with respiratory viruses. RESULTS: Our study included 164,283 AMI hospital admissions and 6,180,896 PCR-tested samples. An increase in any respiratory virus detection rate was significantly associated with an increased incidence of AMI (incidence rate ratio [IRR] 1.0041; 95% confidence interval 1.0015-1.0067), after adjusting for seasonality, cold temperature, and fine particulate matter air pollution. An estimated 8.7% of total AMI events may be attributable to respiratory viral triggers, constituting an average annual incidence of 16.2 per 100,000 population. Significant associations were found with specific respiratory viruses; the fractions of AMI attributable to enterovirus, influenza, and respiratory syncytial virus were 5.2%, 1.5%, and 0.9%, respectively, with figures increased during peak viral seasons. Spatiotemporal analysis revealed that the association was more pronounced in outer-metropolitan areas. CONCLUSIONS: Respiratory viral triggers contribute to the incidence of AMI. Population-level infection prevention strategies, such as vaccination, may reduce the impact of respiratory viral outbreaks during peak seasons.
CLINICAL PERSPECTIVE. What Is New? Using time-series analysis and modern spatiotemporal techniques, we analyzed data from Victoria, Australia, to model population-level associations between AMI and respiratory viral activity and found that a recent laboratory-confirmed respiratory viral infection is associated with a higher incidence of AMI. An estimated 8.7% of total AMI events may be attributable to respiratory viral triggers, constituting an average annual incidence of 16.2 per 100,000 population. What Are the Clinical Implications? Some respiratory viral infections temporarily increase the risk of acute MI. With the existing vaccines available against influenza and respiratory syncytial virus (RSV), public health policy actions for influenza and RSV vaccination, particularly in high-growth urban areas, may help reduce the acute cardiovascular burden and health system strain.

6
Early Population-Level Impact of Helicobacter pylori Eradication on Gastric Cancer Deaths in Japan: A Counterfactual Analysis of Short-Term Divergence

Kowada, A.

2026-02-26 epidemiology 10.64898/2026.02.24.26346975 medRxiv
Top 0.1%
19.3%

Background: Helicobacter pylori infection accounts for 98% of gastric cancer (GC) cases in Japan. Since 2013, the nationwide expansion of H. pylori eradication therapy to chronic gastritis patients has created a unique opportunity to evaluate its population-level impact on GC primary prevention. However, short-term reductions in GC deaths are difficult to interpret given the long natural history of gastric carcinogenesis. This study aimed to assess the early impact of population-level eradication on GC deaths. Methods: We applied a two-layer analytic framework consisting of a counterfactual analysis comparing observed GC deaths during 2013-2021 with expected GC deaths had eradication uptake remained at pre-2013 levels. This was combined with a structured, time-dependent, multilayer state-transition model to estimate GC deaths prevented by eradication using GC incidence integrated with age-dependent H. pylori prevalence. Results: Observed GC deaths declined from 48,632 in 2013 to 41,624 in 2021, whereas counterfactual GC deaths declined more modestly, from 49,794 to 45,654. The divergence between observed and counterfactual GC deaths widened steadily from 1,162 in 2013 to 4,030 in 2021. Model-based estimates indicated that eradication prevented 1,427 GC deaths during 2013-2021, with annual GC deaths prevented increasing from 17 in 2015 to 417 in 2021, particularly among adults aged 50-79. Conclusions: This study demonstrates that H. pylori eradication has already contributed to a 10.4% reduction in GC deaths in Japan by 2021, with annual expansion of primary prevention effects. This framework supports evidence-based evaluation of short-term reductions in GC deaths attributable to H. pylori eradication in high-prevalence settings.
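
The headline divergence can be reproduced directly from the figures in the abstract: counterfactual deaths minus observed deaths in each year. A minimal check:

```python
# GC death counts taken from the abstract; the counterfactual assumes
# eradication uptake had stayed at pre-2013 levels.
observed = {2013: 48_632, 2021: 41_624}
counterfactual = {2013: 49_794, 2021: 45_654}

# Divergence = deaths expected without the 2013 policy change minus
# deaths actually observed in the same year.
divergence = {year: counterfactual[year] - observed[year] for year in observed}
print(divergence)  # {2013: 1162, 2021: 4030}
```

The widening of this gap over time is what the authors interpret as the accumulating early prevention effect.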

7
Threshold Effects of Rehabilitation Intensity on Functional Recovery After Ischaemic Stroke: A Panel Threshold Regression Analysis of Australian Hospital Data

Lim, A.; Venkataraman, P.

2026-03-12 health economics 10.64898/2026.03.11.26348201 medRxiv
Top 0.1%
19.0%

Background: Optimal rehabilitation dosing after ischaemic stroke remains contested. Linear assumptions underlying conventional regression models may mask clinically important threshold effects, whereby functional gains accelerate or plateau beyond specific intensity thresholds. This study applied panel threshold regression to Australian hospital administrative data to identify endogenous breakpoints in the dose-response relationship between rehabilitation intensity and functional recovery. Methods: We used a retrospective longitudinal cohort derived from the Australian Stroke Clinical Registry (AuSCR) and the National Hospital Cost Data Collection (NHCDC) for fiscal years 2018-2019 to 2022-2023. The analytical sample comprised 18,742 hospitalised ischaemic stroke patients across 48 public hospitals in five Australian states. The primary exposure was daily rehabilitation intensity (minutes of physiotherapy, occupational therapy, and speech pathology per inpatient day). The primary outcome was change in the modified Rankin Scale (mRS) score from admission to discharge. We employed Hansen's (1999) panel threshold regression framework to test for single, double, and triple threshold effects, using bootstrap p-values (n=500) to establish statistical significance. Fixed-effects estimation controlled for unobserved hospital heterogeneity. Secondary outcomes included acute length of stay and discharge destination. Cost-related parameters were benchmarked against published Australian cost-effectiveness data. Results: The panel threshold model identified two statistically significant breakpoints in the intensity-recovery relationship (p<0.001 for both). Below the first threshold (27.4 minutes/day; 95% CI: 24.8-29.6), each additional minute of daily rehabilitation was associated with a 0.008-point reduction in mRS score (beta = -0.008, 95% CI: -0.011 to -0.005, p<0.001).
Between the two thresholds (27.4 to 54.7 minutes/day; 95% CI: 51.2-58.9), the marginal benefit approximately doubled (beta = -0.018, 95% CI: -0.022 to -0.013, p<0.001). Above the upper threshold (>54.7 minutes/day), the marginal effect diminished substantially (beta = -0.004, 95% CI: -0.009 to 0.002, p=0.186), suggesting a ceiling effect. These dose-response patterns were consistent across age subgroups, stroke severity strata, and hospital volume tertiles. Conclusions: The dose-response relationship between rehabilitation intensity and inpatient functional recovery after ischaemic stroke is non-linear, with clinically meaningful thresholds. Patients receiving between 27 and 55 minutes of daily multidisciplinary therapy derive disproportionate functional benefit per unit of resource investment. Scheduling rehabilitation below the lower threshold represents a clinically and economically suboptimal allocation of inpatient resources. These findings have direct implications for workforce planning, clinical pathway design, and value-based commissioning in Australian public hospitals.
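
The reported segment slopes imply a piecewise-linear dose-response curve. The sketch below reconstructs that curve from the abstract's point estimates; it is illustrative only and is not the authors' estimation code (which fits the breakpoints endogenously via Hansen's framework).

```python
# Breakpoints (minutes of therapy per day) and per-minute mRS slopes
# taken from the abstract's point estimates.
THRESHOLDS = (27.4, 54.7)
SLOPES = (-0.008, -0.018, -0.004)  # below / between / above thresholds

def predicted_mrs_change(minutes: float) -> float:
    """Cumulative predicted change in mRS at a given daily intensity,
    accumulating each segment's slope up to `minutes`."""
    change, prev = 0.0, 0.0
    for cut, slope in zip(THRESHOLDS + (float("inf"),), SLOPES):
        segment = max(0.0, min(minutes, cut) - prev)
        change += slope * segment
        prev = cut
        if minutes <= cut:
            break
    return change

# More minutes yield a larger (more negative) predicted mRS change, and
# the middle segment is where each extra minute buys the most.
print(predicted_mrs_change(27.4) < predicted_mrs_change(10.0))  # True
```

A negative change here means improvement (lower mRS at discharge); the flattening above 54.7 minutes/day corresponds to the ceiling effect described in the Results.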

8
Modelling mass asymptomatic testing strategies for early containment of infectious disease outbreaks in prisons

Brooks, J. T.; Pellis, L.; Scarabel, F.; Xu, J. T.; Bakker, P.; Hall, I.; Adamson, J.; Bailie, R.; Campbell, R.; Dennis, N.; Straus, L.; Willner, S.; Van Der Veen, J.; Edge, C.; Fowler, T.

2026-03-14 public and global health 10.64898/2026.03.13.26348273 medRxiv
Top 0.1%
18.7%

Objective: To investigate a strategy of mass asymptomatic testing and isolation ("pulse testing") aimed at early containment of outbreaks in prisons, in comparison to or in combination with a symptom-based isolation strategy. Methods: Simulations using an individual-based time-since-infection model were run under different pathogen and intervention strategy scenarios. Measured outcomes were the proportion of outbreaks contained and the number of individuals isolated. Results: For R0 = 2, a 25% probability of being asymptomatic (pa = 0.25), COVID-19-like infection dynamics and perfect adherence, one pulse test contained approximately 20% of outbreaks, and three tests up to 50%. With no asymptomatic cases, three tests performed similarly to isolating cases one day after symptoms (≈55% of outbreaks contained), but symptom-based isolation degraded significantly faster than pulse testing with increasing pa. With perfect adherence, combining both interventions contained between ≈25% (R0 = 3, pa = 0.5) and >90% (R0 = 1.5, pa = 0) of outbreaks. Across all scenarios, pulse testing isolated substantially fewer individuals than symptom-based isolation, e.g. ≈5% versus ≈30% for R0 = 2 and pa = 0.25. Conclusion: If implemented promptly upon outbreak declaration and with high adherence, pulse testing may stop outbreaks early, substantially reducing the number of isolations and mitigating the impact on prison regime and resident/staff wellbeing. However, for large R0 or delayed implementation, effectiveness drops rapidly.

9
Effective Implementation of Medicines Shortage Policy: Evidence from Australia's Serious Scarcity Substitution Instruments

Janetzki, J.; Kalisch Ellett, L.; Pratt, N.; Kemp-Casey, A.

2026-02-04 epidemiology 10.64898/2026.02.02.26345406 medRxiv
Top 0.1%
18.3%

Background: Medication shortages are a considerable and ongoing issue in healthcare, disrupting consumer access. Since 2021, Australia's national medicines regulator has issued Serious Scarcity Substitution Instruments (SSSIs), allowing pharmacists to substitute a specific therapeutically equivalent strength and/or formulation of a medicine without prior approval from a prescriber. The impact of SSSIs on utilisation of medicines has not been investigated. Objective: To determine whether SSSIs are effective in addressing medicine shortages and meeting patient need. Methods: This retrospective cohort study used aggregated pharmacy claims to examine the utilisation of 12 medicines which had an SSSI. We calculated the percentage change in defined daily doses dispensed per 1000 population per day in the 11 months after SSSI implementation, compared with the previous two years. A percentage change of less than 20% was used to indicate success. Results: Following product shortages, utilisation fell for 10 of the 12 medicines examined. For eight of these medicines (amoxicillin, cefalexin, estradiol, fluoxetine, insulin degludec with insulin aspart, isosorbide mononitrate, vigabatrin, and warfarin) decreases in utilisation were minimised to <20%. On average, SSSIs where all permitted substitute products were scarce (e.g. abatacept) were associated with larger decreases in use (between -22% and -68%) than those for which none or only some of the substitutes were in shortage (between -45% and +7%, respectively). Conclusions: While product shortages led to decreases in medicines consumption, SSSIs appeared to be successful in limiting decreases. However, SSSIs were less likely to be successful when many of the permitted substitute products were also scarce.
Key points: This study is the first to evaluate the effectiveness of Australia's Serious Scarcity Substitution Instruments (SSSIs) in mitigating medicine shortages using national dispensing data and interrupted time series analysis. Two-thirds of SSSIs successfully limited utilisation declines to less than 20%, with effectiveness strongly linked to the availability of substitute products. By demonstrating variable utilisation outcomes across medicines, this study adds empirical evidence to international debates on substitution policies, suggesting that nationally standardised frameworks like Australia's SSSIs may function best when supported by robust supply intelligence. SSSIs are a valuable policy tool for maintaining continuity of care during shortages, but timely implementation and ensuring substitute supply are critical for optimal impact.
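
The study's success criterion is a plain percentage-change calculation on dispensing rates. The sketch below uses hypothetical rates (defined daily doses per 1,000 population per day), not study data; only the 20% margin comes from the abstract.

```python
def pct_change(post_rate: float, baseline_rate: float) -> float:
    """Percentage change in DDDs/1,000 population/day after SSSI
    implementation, relative to the prior baseline rate."""
    return (post_rate - baseline_rate) / baseline_rate * 100

# Hypothetical medicine whose decline stays within the 20% margin the
# study used to indicate a successful SSSI.
change = pct_change(post_rate=4.3, baseline_rate=5.0)
print(f"{change:.0f}%")  # -14%
```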

10
Practical alcohol risk-reduction advice plus a brief commitment declaration in a social drinking laboratory: a pilot cluster randomized trial

Yoshimoto, H.; Hadano, T.; Shimada, K.; Gosho, M.; Fukuda, T.; Komano, Y.; Umeda, K.; Iwase, M.; Kusano, Y.; Kawabata, T.

2026-04-21 public and global health 10.64898/2026.04.19.26351067 medRxiv
Top 0.1%
17.8%

Background: Practical alcohol risk-reduction strategies are widely recommended in public-facing alcohol guidance, but randomized evidence from socially interactive drinking episodes remains limited. We conducted a pilot cluster randomized trial to evaluate the feasibility and preliminary effects of a package intervention comprising practical drinking-strategy information, participant self-selection of same-day strategies, and a brief commitment declaration in a social drinking laboratory. Methods: This single-center, parallel-group pilot trial was conducted in Japan. Pre-existing social groups participated. One or two groups scheduled in the same session slot were combined into a time-slot allocation unit, which was randomized 1:1 either to the package intervention or to alcohol-related knowledge only. The primary outcome was total pure alcohol intake during the first 120 min. Session satisfaction on a Visual Analog Scale (VAS) was a prespecified secondary participant-experience outcome. Results: Of 83 interested individuals, 63 were randomized and 59 participants in 17 social groups and 12 allocation units were included in the modified intention-to-treat analysis. The mean paired intervention-control difference for 120-min alcohol intake was -8.84 g (95% confidence interval [CI] -27.92 to 10.23; exact sign-flip p = 0.281). The corresponding exploratory 0-30 min difference was -4.90 g (95% CI -10.48 to 0.68; exact sign-flip p = 0.094). In a genotype-adjusted participant-level sensitivity analysis, the intervention coefficient for 120-min intake was -16.0 g (95% CI -30.9 to -1.1; p = 0.036). Session satisfaction was high in both arms with no clear between-arm difference. Next-day follow-up was 100%, and no adverse-event-related discontinuations occurred. Conclusions: The intervention was feasible to deliver in a socially interactive drinking setting, and session satisfaction was high in both arms. Primary allocation-unit estimates favored lower alcohol intake but were imprecise.
Larger trials are needed to estimate effects more precisely, while considering the potential influence of genotype imbalance on effect estimation in East Asian samples. Trial registration: University Hospital Medical Information Network Clinical Trials Registry (UMIN-CTR) UMIN000060685. Registered 17 February 2026.

11
Comparative Analysis of Health Care Use and Costs for Orthobiologic versus Surgical Treatments in Economically High-Impact Knee Conditions

Lentz, T. A.; Burrows, J.; Brucker, A.; Wong, A. I.; Qualls, L.; Divakaran, R.; Centeno, C.; Suther, T.; Thomas, L.

2026-03-02 orthopedics 10.64898/2026.02.27.26347270 medRxiv
Top 0.1%
17.6%

Background: Total knee arthroplasty (TKA), partial knee arthroplasty (pKA), and arthroscopic meniscectomy are among the most commonly performed procedures for knee osteoarthritis and degenerative meniscal tears in the United States, yet concerns persist regarding overuse, variable clinical benefit, and high costs. Orthobiologic treatments, including platelet-rich plasma (PRP) and bone marrow aspirate concentrate (BMAC), have emerged as less invasive alternatives, but downstream health care resource use (HCRU) and costs associated with these treatments relative to surgery are not well established. Methods: We conducted a retrospective, observational cohort study using linked commercial insurance claims data and a national orthobiologic treatment registry to compare downstream HCRU and costs following orthobiologic versus surgical treatment of knee conditions. Two comparisons were evaluated separately: (1) PRP versus arthroscopic meniscectomy among patients with degenerative meniscal pathology and minimal osteoarthritis, and (2) BMAC with or without PRP versus TKA or pKA among patients with knee osteoarthritis. Eligible procedures occurred between 2016 and 2023. Propensity score matching was used to balance demographic and clinical confounders. Co-primary outcomes were total health care costs at 12 and 24 months post-procedure, with exploratory analyses at 36 and 48 months. Costs were estimated using multiple approaches, including Medicare-based estimates, commercial payer estimates, and aggregate allowed amounts. HCRU outcomes included outpatient visits, physical therapy, imaging, opioid use, repeat injections, and subsequent surgery. Results: After matching, analyses included 167 PRP-treated patients matched to 1,670 meniscectomy patients and 165 BMAC/PRP-treated patients matched to 1,650 TKA/pKA patients, with good balance across pre-specified confounders. Progression to subsequent surgery after orthobiologic treatment was rare at 12 and 24 months in both cohorts. Compared with TKA/pKA, BMAC/PRP was associated with lower overall health care use for several services, including outpatient visits, physical therapy, knee radiographs, and opioid prescriptions, although magnetic resonance imaging was more frequent following orthobiologic treatment. Total costs at 12 and 24 months were consistently higher for TKA/pKA than for BMAC/PRP across all costing methods. In the PRP versus meniscectomy comparison, differences in health care use were modest, and costs were similar or lower for PRP depending on the costing approach. Exploratory analyses through 48 months showed similar patterns, with persistently low rates of subsequent surgery after orthobiologic treatment and generally higher cumulative costs following surgical intervention. Conclusions: In this real-world, propensity-matched analysis of commercially insured patients, orthobiologic treatments with PRP or BMAC were associated with similar or lower downstream health care costs compared with commonly performed surgical alternatives for selected patients with degenerative meniscal tears or knee osteoarthritis. Progression to surgery following orthobiologic treatment was uncommon through two years and remained low in longer-term exploratory analyses. These findings support the consideration of orthobiologic therapies as potentially lower-cost alternatives to surgery for appropriately selected patients and may inform shared decision-making and payer policy.
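The 1:10 propensity-score matching in this study (167 treated matched to 1,670 controls) can be pictured as a greedy nearest-neighbour match within a caliper. The sketch below is purely illustrative: the study does not report its matching algorithm or caliper, and the scores are made up.

```python
def greedy_match(treated_ps, control_ps, k=10, caliper=0.05):
    """Greedy 1:k nearest-neighbour matching on the propensity score.

    Each treated unit takes up to k unused controls whose scores fall
    within the caliper; controls are consumed without replacement.
    """
    available = dict(enumerate(control_ps))  # control index -> score
    matches = {}
    for t_idx, t_ps in enumerate(treated_ps):
        chosen = []
        for _ in range(k):
            if not available:
                break
            c_idx, c_ps = min(available.items(), key=lambda kv: abs(kv[1] - t_ps))
            if abs(c_ps - t_ps) > caliper:
                break  # nearest remaining control is outside the caliper
            del available[c_idx]
            chosen.append(c_idx)
        matches[t_idx] = chosen
    return matches

# Toy example: one treated patient, four candidate controls
print(greedy_match([0.30], [0.29, 0.31, 0.90, 0.32], k=2))  # -> {0: [0, 1]}
```

The control at 0.90 is never matched because it falls outside the caliper; in practice matching quality is then checked with standardized mean differences, as the study's "good balance across pre-specified confounders" implies.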

12
Gestational Environment Captured by the Neonatal Metabolome is not Predictive of Later Inflammatory Bowel Disease

Fracchia, A.; Rudbaek, J. J.; Chakradeo, K.; Jess, T.; Ottosson, F.; Sazonovs, A.

2026-02-18 gastroenterology 10.64898/2026.02.18.26346468 medRxiv
Top 0.1%
17.3%

Background: Gestational exposures may contribute to the newborn's lifetime risk of inflammatory bowel disease (IBD). While gestational influences are associated with IBD onset, the causality and confounding of such exposures are difficult to ascertain. The neonatal metabolome provides a metabolic snapshot of gestational influences. Objective: We tested the neonatal metabolome's ability to predict future IBD, to assess whether gestational exposures are reflected in early molecular precursors of the disease. Methods: We profiled dried blood spots from 520 newborns who later developed IBD and matched controls using high-resolution untargeted mass spectrometry metabolomics (1,350 QC-passing metabolites). Genotyping was available for 1,009 of these individuals. PERMANOVA confirmed assay sensitivity to gestational exposures; gradient boosting was used for prediction. Results: The neonatal metabolome significantly captured maternal smoking, birth weight, and gestational age (p < 0.001), but explained minimal variance in IBD status (R² = 0.09%, p = 0.390) and showed no predictive power for IBD (AUC = 0.51, 95% CI 0.50-0.52, p = 0.585). Stratifying by disease subtype and age of onset did not improve performance. In contrast, genetic risk scores were modestly predictive (CD: AUC = 0.64, p < 5.11×10⁻¹⁴; UC: AUC = 0.63, p < 7.65×10⁻¹²), but uncorrelated with neonatal metabolomic profiles (CD: p = 0.650; UC: p = 0.970), suggesting a later-age effect. Conclusions: Using a large, comprehensively profiled cohort, we demonstrate that neonatal metabolomic profiles sensitively capture gestational signatures, but not overall future IBD risk. Our findings suggest that most IBD risk accumulates later in life, beyond gestational molecular imprints.
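An AUC of 0.51, as reported above, is essentially chance. The AUC is equivalent to the Mann-Whitney probability that a randomly chosen case scores higher than a randomly chosen control, which a minimal sketch makes concrete (toy scores, not study data):

```python
def auc_mann_whitney(case_scores, control_scores):
    """AUC as P(case score > control score), counting ties as 0.5."""
    wins = sum(
        1.0 if c > n else 0.5 if c == n else 0.0
        for c in case_scores
        for n in control_scores
    )
    return wins / (len(case_scores) * len(control_scores))

print(auc_mann_whitney([0.9, 0.8, 0.7], [0.6, 0.5]))  # perfect separation -> 1.0
print(auc_mann_whitney([0.5, 0.6], [0.5, 0.6]))       # indistinguishable -> 0.5
```

Under this reading, the metabolome's 0.51 against the genetic risk scores' 0.63-0.64 is the paper's central contrast: gestational signatures are captured, but they barely separate future cases from controls.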

13
Eligibility Without Equity: Rethinking Age-Based Adult Vaccine Policies

Amin, M. S.; Collins, B.; Beavis, C.; Sigafoos, J.; French, N.; Hungerford, D.

2026-02-18 public and global health 10.64898/2026.02.17.26346473 medRxiv
Top 0.1%
17.3%

Embedding equity into vaccine eligibility is essential for reducing health inequalities. Yet adult vaccine eligibility in most European countries is based primarily on fixed age thresholds, prioritising cost-effectiveness. This approach risks excluding the most vulnerable populations, who live in deprived communities with poorer health and shorter survival into older age. Extending eligibility based on clinical risk partially addresses this gap; however, higher rates of underdiagnosis and delayed diagnosis in deprived populations limit the fairness of this approach, and the status quo of adult vaccine eligibility criteria is likely doing active harm. In this perspective, we demonstrate the extent of this inequity in England. For example, the average male living in Hyde Park in the northern city of Leeds dies 9.5 years too early to ever receive the RSV vaccine offer at age 75. Meanwhile, a male living in Hyde Park, London, lives much longer and may receive the benefits of the RSV vaccine for 10 years or more. Drawing on lessons from the COVID-19 pandemic, we propose further evaluation of alternative eligibility models that incorporate local place-based disadvantage, which will inherently account for life expectancy and deprivation levels. Such models would ensure earlier access to vaccines for communities with the greatest need and improve health equity without overwhelming health systems.

14
Risk Factor-Based Metabolomic Profiling Reveals Plasma Biomarkers of Hepatobiliary Cancer

Boekstegers, F. J.; Viallon, V.; Breeur, M.; Voican, C.; Perlemutter, G.; Chatziioannou, C.; Keski-Rahkonen, P.; Scherer, D.; Jenab, M.; Lorenzo Bermejo, J.

2026-03-10 gastroenterology 10.64898/2026.03.09.26347912 medRxiv
Top 0.1%
17.1%

Background and Aims: Highly aggressive hepatobiliary tumours include gallbladder cancer (GBC), hepatocellular carcinoma (HCC), intrahepatic and extrahepatic cholangiocarcinoma (iCCA, eCCA) and ampulla of Vater cancer (AoV). We aimed to identify plasma biomarkers for the early diagnosis of hepatobiliary cancer by leveraging the metabolomic signatures of established clinical risk factors. Methods: Based on 273,190 participants from the UK Biobank, we (1) identified metabolites associated with gallstone-related conditions (e.g. cholecystitis), primary sclerosing cholangitis (PSC) and metabolic liver diseases (e.g. cirrhosis), and (2) evaluated the relationship between the identified metabolites and the risk of GBC, HCC, iCCA, eCCA and AoV. Findings were validated in an independent group of 227,809 participants from the UK Biobank. We also derived metabolomic scores summarizing the three risk-factor signatures and evaluated their ability to stratify cancer risk. Results: We identified 27 metabolites associated with gallstone-related conditions, 11 with PSC, and 34 with metabolic liver diseases, some of which showed associations with inconsistent directions across risk factors, suggesting distinct pathogenic processes. Several metabolites were associated with cancer risk in both the discovery and validation datasets, independently of established risk factors, predominantly for HCC (16 signals) and iCCA (4), with one for GBC and none for eCCA and AoV. Metabolomic scores clearly distinguished individuals at high risk for HCC and iCCA. Conclusion: The preselection of plasma metabolites associated with established risk factors facilitated the subsequent identification and validation of biomarkers for early cancer detection. The identified metabolites suggest specific pathogenic pathways for each type of hepatobiliary cancer. Wider replication is urgently needed to advance toward clinical implementation.
What you need to know. Background and context: Clinical risk factors for hepatobiliary cancers often progress silently, making early identification of high-risk individuals difficult and highlighting the need for biological markers detectable before clinical diagnosis. New findings: Risk-factor-based serum metabolomic profiling identified circulating metabolites that predict specific hepatobiliary cancers years before diagnosis, with the strongest and most consistent signals for hepatocellular carcinoma and intrahepatic cholangiocarcinoma. Limitations: Clinical risk factors were assumed to be frequently underdiagnosed in the UK Biobank, and event numbers were relatively small for some cancers, which may have reduced power and attenuated associations for less common endpoints. Clinical research relevance: This study shows that serum metabolic profiles can identify individuals at increased risk for hepatobiliary cancers long before symptoms appear, particularly for hepatocellular carcinoma and intrahepatic cholangiocarcinoma. These findings support the development of precision risk-stratification strategies that may ultimately enable earlier surveillance. Basic research relevance: By first identifying metabolites linked to specific liver and biliary clinical conditions, the study clarifies which metabolites are indirectly associated with hepatobiliary cancers through known disease pathways. Testing these metabolites again while adjusting for diagnoses of those conditions then reveals which ones also show direct, pathway-independent associations with individual hepatobiliary cancers, providing clearer insight into cancer-specific metabolic mechanisms.
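A "metabolomic score" summarizing a risk-factor signature, as described above, is conventionally a weighted sum of standardized metabolite levels. The sketch below illustrates that construction only; the metabolite names, weights, and values are entirely hypothetical and not taken from the study.

```python
def metabolomic_score(z_levels, weights):
    """Weighted sum of standardized metabolite levels (z-scores)."""
    if set(z_levels) != set(weights):
        raise ValueError("metabolite sets must match")
    return sum(weights[m] * z_levels[m] for m in weights)

# Hypothetical 3-metabolite signature and one person's z-scored levels
weights = {"met_A": 0.4, "met_B": -0.2, "met_C": 0.1}
person = {"met_A": 1.5, "met_B": -0.5, "met_C": 2.0}
print(metabolomic_score(person, weights))  # 0.4*1.5 + (-0.2)*(-0.5) + 0.1*2.0 ≈ 0.9
```

Risk stratification then amounts to comparing such scores against cohort quantiles, e.g. flagging individuals in the top decile for surveillance.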

15
Alcohol, Cardiovascular Disease, and Industry Influence: A Meta-review

Golder, S.; Lau, O.; Hartwell, G.; Blanchard, L.; Gibson, A.; Crookes, C.; Foster Davies, L.; Glover, R.

2026-03-20 cardiovascular medicine 10.64898/2026.03.18.26348685 medRxiv
Top 0.1%
15.0%

Objectives: This meta-review compares methodological and reporting approaches between systematic reviews examining alcohol dose and cardiovascular disease (CVD) and assesses whether alcohol industry involvement is associated with divergent conclusions. Methods: KSR Evidence was searched on 6 May 2025 to update a cohort of 60 systematic reviews from a previous review. Reviews were included if they examined any dose-response relationship between alcohol consumption and CVD. Two reviewers independently screened records and extracted data on review characteristics and citations. Methodological quality was appraised using AMSTAR 2. For a matched sample of reviews with and without known alcohol industry funding, the overlap of included primary studies was compared using Corrected Covered Area (CCA) analysis. Results: Thirty additional systematic reviews met the inclusion criteria, yielding 90 systematic reviews (1996-2025). Most (60.0%, 54/90) concluded that alcohol had a cardioprotective effect, whereas 31.1% (28/90) concluded no evidence of protection, and 8.9% (8/90) were inconclusive. Twenty reviews (22.0%) had declared or inferred alcohol industry funding or author connections; all but one reported a protective effect at lower doses, and the other was inconclusive. Industry-connected reviews were cited more often (mean 575.9 vs 193.0, p=0.0002) and more commonly examined overall CVD rather than specific conditions (such as hypertension or stroke). Study overlap was low (CCA 2.59%) and 99% of reviews were rated as critically low quality. Conclusions: The fragmented evidence base is of poor methodological quality, with selective inclusion of studies. Alcohol industry connections are strongly associated with conclusions favouring alcohol consumption, highlighting the need for independent, high-quality systematic reviews.
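The Corrected Covered Area cited above measures how much a set of reviews reuse the same primary studies: CCA = (N − r) / (rc − r), where N is the total number of study inclusions across reviews, r the number of unique primary studies, and c the number of reviews. A minimal sketch with invented study IDs:

```python
def corrected_covered_area(review_study_sets):
    """CCA = (N - r) / (r*c - r) for a list of reviews' included-study sets."""
    c = len(review_study_sets)
    n = sum(len(s) for s in review_study_sets)  # total inclusions (N)
    r = len(set().union(*review_study_sets))    # unique primary studies (r)
    return (n - r) / (r * c - r)

# Three hypothetical reviews sharing only study "s1"
reviews = [{"s1", "s2", "s3"}, {"s1", "s4"}, {"s1", "s5"}]
print(corrected_covered_area(reviews))  # (7 - 5) / (5*3 - 5) = 0.2, i.e. 20% overlap
```

The 2.59% reported in the abstract means the 90 reviews drew on largely disjoint sets of primary studies, which is what "selective inclusion" refers to.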

16
Cross-cohort insights into the association of handgrip strength transitions and burdens with cardiovascular disease risk

Lin, H.; Zeng, P.

2026-03-09 cardiovascular medicine 10.64898/2026.03.05.26347756 medRxiv
Top 0.1%
14.9%

Background: Previous studies established handgrip strength (HGS) as a potential risk factor for cardiovascular disease (CVD). However, existing studies focused exclusively on baseline HGS and neglected longitudinal changes in HGS during follow-up. Our aim was therefore to investigate the associations of transitions and dynamic burdens of HGS with incident CVD risk. Methods: We analyzed data from the UK Biobank (UKB), the China Health and Retirement Longitudinal Study (CHARLS), the Survey of Health, Ageing and Retirement in Europe (SHARE), and the Korean Longitudinal Study of Ageing (KLOSA). We defined HGS transitions based on HGS at baseline and the first follow-up, and created three indicators to reflect HGS burdens. Cox models were applied to examine the association of HGS transitions and burdens with incident CVD risk. The predictive value of the HGS indices was also evaluated. Results: A total of 73,555 participants were retained, and 4,722 (6.4%) incident CVD cases were identified during follow-up. Transition analyses revealed that increased HGS during follow-up was associated with reduced CVD risk, whereas decreased HGS was associated with an elevated risk. Per standard deviation decrement, HGS slope, cumulative HGS, and relative cumulative HGS were associated with a 19.8% (95% CI 1.5-41.5%), 44.0% (95% CI 10.8-87.2%), and 26.7% (95% CI 9.4-46.8%) elevated risk of CVD, respectively. These associations were independent of and more pronounced than baseline HGS, with stronger effects observed in the East Asian cohorts (CHARLS and KLOSA) than in the European cohorts (UKB and SHARE). Incorporating dynamic HGS metrics enhanced predictive accuracy, with HGS burdens providing greater gains than HGS alone. For the optimal models, the HGS indices increased the AUC by up to 7.6% in Europeans and 5.9% in East Asians. Conclusions: HGS burdens outperformed single cross-sectional HGS in predicting cardiovascular health, suggesting the clinical utility of longitudinal HGS monitoring in clinical and public health strategies for CVD prevention.
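The abstract names three burden indicators (HGS slope, cumulative HGS, relative cumulative HGS) without defining them. A plausible construction, following the usual cumulative-exposure literature rather than this study's unpublished definitions, is an OLS slope over visits plus trapezoidal area under the HGS curve, absolute and normalized:

```python
def hgs_burden(times, hgs):
    """Slope, cumulative (trapezoidal AUC), and relative cumulative HGS.

    'Relative cumulative' here normalizes the AUC by baseline HGS times
    follow-up duration -- one plausible definition, not the study's own.
    """
    n = len(times)
    mt, mh = sum(times) / n, sum(hgs) / n
    slope = (sum((t - mt) * (h - mh) for t, h in zip(times, hgs))
             / sum((t - mt) ** 2 for t in times))
    cumulative = sum((hgs[i] + hgs[i + 1]) / 2 * (times[i + 1] - times[i])
                     for i in range(n - 1))
    relative = cumulative / (hgs[0] * (times[-1] - times[0]))
    return slope, cumulative, relative

# Grip strength (kg) declining 2 kg/year over three annual visits
print(hgs_burden([0, 1, 2], [30, 28, 26]))  # slope -2.0, AUC 56.0, relative ≈ 0.93
```

Indicators like these would then enter a Cox proportional hazards model as time-fixed covariates, which is how a per-SD hazard ratio of the kind reported above is obtained.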

17
Standardized country-level delivery unit cost estimates for routine childhood, routine adolescent, and campaign vaccination: an updated modeling analysis

Portnoy, A.; Clarke-Deelder, E.; Holroyd, T. A.; Hogan, D. R.; Mengistu, T.; Menzies, N. A.

2026-03-17 public and global health 10.64898/2026.03.15.26348434 medRxiv
Top 0.1%
14.8%

Background: Reliable estimates of immunization delivery costs are essential for planning, budgeting, and economic evaluation of vaccination programs. Although a number of empirical costing studies have been conducted in recent years, many low- and middle-income countries (LMICs) continue to lack up-to-date, accurate delivery unit cost estimates, particularly for adolescent and campaign vaccination strategies. Building on prior work, this study aimed to produce updated, standardized country-level estimates of immunization delivery unit costs for different vaccination modalities across all LMICs. Methods: Using data on study-level unit cost estimates reported by empirical costing studies in the 2024 update of the Immunization Delivery Cost Catalogue, we fitted Bayesian meta-regression models predicting per-dose delivery costs for routine childhood vaccination, routine adolescent vaccination (using human papillomavirus vaccination as a proxy), and vaccination delivered via mass campaigns. Regression models incorporated country-level covariates (per-capita gross domestic product, population size, third-dose diphtheria-tetanus-pertussis vaccination coverage, urbanization, population density, and under-five mortality) and study-level characteristics (cost category, financial versus economic costing perspective, and full versus incremental costing methodology). Fitted models were used to generate country-specific and population-weighted average economic and financial cost-per-dose estimates for 2024, in 2024 US dollars. Results: The analysis included 142 observations for routine childhood vaccination, 63 for routine adolescent vaccination, and 113 for campaign vaccination. For 2024, the population-weighted mean economic cost per dose across all LMICs was estimated at $5.86 (95% uncertainty interval: $2.74-13.43) for routine childhood vaccination, $17.65 ($7.76-44.30) for routine adolescent vaccination, and $3.13 ($2.03-4.78) for campaign vaccination. Corresponding financial costs per dose were $3.02 ($1.52-6.29), $10.08 ($4.15-25.01), and $1.79 ($1.11-2.91), respectively. Substantial heterogeneity in delivery costs was observed across countries, delivery modalities, and cost perspectives. The estimated associations between predictors and unit costs may be influenced by unobserved study characteristics, and should therefore be interpreted as correlational rather than causal. Conclusions: By leveraging an expanded empirical evidence base and a Bayesian meta-regression framework, this study provides updated per-dose delivery costs for routine childhood vaccination and new estimates for routine adolescent vaccination and campaign vaccination. As policy decisions often must be made despite incomplete information, these estimates provide a practical source of evidence to support analyses when direct cost data are unavailable.
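Meta-regressions of this kind typically model log unit cost as a linear function of country covariates, with prediction amounting to exponentiating the linear predictor for each country. The sketch below shows only that mechanical step; every coefficient is a hypothetical placeholder, not a value fitted by the study.

```python
import math

def predict_unit_cost(log_gdp_pc, dtp3_coverage, urban_share,
                      b0=-2.0, b_gdp=0.35, b_cov=0.8, b_urb=-0.3):
    """Exponentiated linear predictor for per-dose delivery cost (USD).

    All coefficients are illustrative placeholders, not the fitted model.
    """
    eta = b0 + b_gdp * log_gdp_pc + b_cov * dtp3_coverage + b_urb * urban_share
    return math.exp(eta)

# With b_gdp > 0, richer countries predict higher delivery cost per dose
low = predict_unit_cost(math.log(1500), dtp3_coverage=0.85, urban_share=0.4)
high = predict_unit_cost(math.log(12000), dtp3_coverage=0.85, urban_share=0.4)
print(low < high)  # True
```

In the Bayesian version, the coefficients are posterior draws rather than point values, so each country's prediction comes with the uncertainty interval seen in the abstract's estimates.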

18
Model-Based Evaluation of Colorectal Cancer Screening Effectiveness: Three Rounds of Multitarget Stool DNA Testing Versus One Colonoscopy

Dore, M.; Ebner, D. W.; Vahdat, V.; Estes, C.; Ozbay, A. B.; Foster, V.; Limburg, P. J.

2026-03-03 gastroenterology 10.64898/2026.01.30.26344467 medRxiv
Top 0.1%
14.7%

Background: Several colorectal cancer (CRC) screening modalities are guideline-recommended in the United States, yet they vary considerably in screening interval and real-world adherence. As a result, single-round test performance may not reflect cumulative effectiveness over time. This study compared the 10-year longitudinal outcomes of two CRC screening strategies: triennial next-generation multitarget stool DNA testing (ng mt-sDNA) and decennial screening colonoscopy. Methods: This study used the validated, microsimulation-based Colorectal Cancer and Adenoma Incidence and Mortality (CRC-AIM) model to estimate 10-year cumulative outcomes for two guideline-recommended screening strategies: triennial ng mt-sDNA and decennial colonoscopy. Model inputs included test-specific performance and real-world adherence. Outcomes included CRC and precancerous lesions detected, CRC mortality reductions, and life-years gained (LYG). Sensitivity analyses examined the effects of varying both screening adherence and follow-up colonoscopy adherence. Results: Over 10 years, per 1,000 individuals offered screening, ng mt-sDNA detected 13% more precancerous lesions and 11% more CRC cases than colonoscopy, with a greater proportion of CRCs identified through screening rather than symptomatic detection. ng mt-sDNA resulted in greater CRC mortality reduction (33% vs 20%) and 62% more life-years gained, with consistent findings across sensitivity analyses. Conclusions: With real-world adherence, screening with triennial ng mt-sDNA demonstrates superior cumulative effectiveness compared with decennial colonoscopy, driven by higher adherence and favorable longitudinal performance. These findings support the expanded use of noninvasive stool-based screening to reduce CRC mortality and alleviate capacity constraints associated with colonoscopy-based screening. Broader adoption of ng mt-sDNA may enhance population-level CRC prevention by increasing participation and improving early detection across the screening-eligible population. Plain language summary: Colorectal cancer screening tests are recommended at different time intervals and completed at different adherence rates in clinical practice. This analysis used a validated simulation model to compare 10-year outcomes of a triennial next-generation multitarget stool DNA test (ng mt-sDNA) with a single colonoscopy, accounting for real-world screening and follow-up colonoscopy adherence. Our findings indicate that repeated ng mt-sDNA provides greater cumulative screening effectiveness than colonoscopy over a 10-year period.

19
A cross-sectional examination of immune adaptations during pregnancy in the ECHO Cohort

Banker, S. M.; Shapiro-Thompson, R.; Sinsel, S.; Ghassabian, A.; Douglas, C.; Nelson, M. E.; Peterson, L. A.; Thyagarajan, B.; Morales, S.; Hockett, C. W.; Elliot, A. J.; Giamberardino, S. N.; Shuffrey, L. C.; the ECHO Cohort Consortium

2026-02-07 immunology 10.64898/2026.02.04.703821 medRxiv
Top 0.1%
14.6%

Background: Pregnancy requires finely tuned immune changes that support implantation, placental development, maternal-fetal tolerance, and preparation for labor, yet the normative trajectories of circulating inflammatory proteins across gestation remain poorly defined. This cross-sectional study investigates how circulating inflammatory proteins vary with gestational age in pregnancy and examines the impacts of fundamental biological characteristics, such as gravidity and fetal sex. Methods: Data were drawn from 1154 pregnant individuals from six study sites of the National Institutes of Health Environmental influences on Child Health Outcomes (ECHO) Cohort. We used Olink high-throughput proteomic profiling to map cross-sectional associations between protein expression levels and gestational age at blood draw using linear, spline-based, and generalized additive modeling approaches. Results: Generalized additive models provided the best fit, revealing that immune changes across pregnancy were predominantly nonlinear. Sixty-one proteins showed significant associations with gestational age, with many exhibiting shared inflection points that aligned with major physiological transitions. A small subset of proteins also showed evidence of modification by fetal and maternal characteristics. CD244 displayed different gestational patterns by fetal sex, while CST5 and SIRT2 showed varied gestational associations by maternal gravidity. Conclusion: The findings highlight pregnancy as a sequence of coordinated immune transitions rather than a simple linear shift and provide one of the most detailed characterizations to date of circulating inflammatory protein dynamics across human gestation. Establishing these normative trajectories offers a crucial reference for detecting early deviations that may signal risk for pregnancy complications and for identifying biomarkers in maternal and fetal health research.

20
AI is Smart. Is it Wise? Quantifying the Effect of Patient-Choice (β) on Physical Outcomes

Gurel, O.; Rasmussen, M. F.; Veginati, V.; Weinstein, J. N.

2026-03-12 orthopedics 10.64898/2026.03.10.26348069 medRxiv
Top 0.1%
14.4%

Large language models (LLMs) increasingly guide clinical decisions through population-level evidence, yet they cannot encode individual patient preferences. When treatments yield comparable outcomes, patient choice may drive decisions, though its effect remains unquantified. The Spine Patient Outcomes Research Trial (SPORT), marked by similar surgical and nonoperative results and substantial crossover, provided a natural experiment to use causal-inference methods to estimate unbiased treatment effects and quantify the contribution of patient choice to outcomes. Using only published aggregate results from SPORT, we conducted a two-stage least squares instrumental-variable analysis using randomized treatment assignment as the instrument, with Complier Average Causal Effects (CACE) and E-values assessing sensitivity to unmeasured confounding. Primary outcomes were SF-36 Bodily Pain, SF-36 Physical Function scores, and the Oswestry Disability Index. We decomposed treatment effects into the biological treatment mechanism and β, the patient-choice contribution. Aggregate estimates revealed G = 15.7 (0.5) and βG = 7.4 (3.4), with the net difference between surgical and nonoperative treatment effects Δ ≈ 0.65. This analysis quantifies a measurable and significant effect of patient choice (β) on physical outcomes. When treatment effects are comparable (Δ small), β, a dimension inaccessible to current LLMs trained on biased population-level evidence, emerges as the dominant driver of decision-making. These findings provide an empirical grounding for informed choice, clarify the limits of LLMs trained on biased evidence, and quantify a structural constraint in AI-driven clinical decision support.
Key messages:
- The effect of patient choice (β) on physical outcomes is real, measurable, and clinically meaningful.
- β becomes the dominant driver of outcomes when biological treatment differences (Δ) are small.
- LLMs cannot encode β because they are trained on biased population-level evidence.
- These findings provide the empirical foundation for informed choice, not just informed consent.
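When only aggregate trial results are available, the two-stage least squares estimate with randomization as the instrument reduces to the Wald ratio: the intention-to-treat effect divided by the difference in actual treatment uptake between arms. This is the standard CACE identity under a crossover-heavy trial like SPORT; the numbers below are invented for illustration, not SPORT's published values.

```python
def wald_cace(mean_assigned_tx, mean_assigned_ctl,
              uptake_tx_arm, uptake_ctl_arm):
    """CACE = ITT effect / compliance difference (Wald/IV estimator)."""
    itt = mean_assigned_tx - mean_assigned_ctl
    compliance_diff = uptake_tx_arm - uptake_ctl_arm
    return itt / compliance_diff

# Hypothetical: ITT gain of 5 SF-36 points; 75% vs 25% actually receive surgery
print(wald_cace(45.0, 40.0, 0.75, 0.25))  # 5.0 / 0.5 = 10.0
```

The ratio shows why crossover matters: the smaller the gap in uptake between arms, the more the intention-to-treat effect understates the effect among compliers, which is the dilution the authors' causal analysis corrects for.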